Coronary heart disease (CHD) is a leading cause of death in the modern world. The development of modern analytical tools for the diagnosis and treatment of CHD is receiving considerable attention from the scientific community. Deep-learning-based algorithms, such as segmentation networks and detectors, play an important role in assisting medical professionals by providing timely analysis of patients' angiograms. This paper focuses on X-ray coronary angiography (XCA), which is considered the "gold standard" in CHD diagnosis and treatment. First, we describe publicly available datasets of XCA images. Then, classical and modern techniques for image preprocessing are reviewed. In addition, common frame-selection techniques are discussed, as they are an important factor in input quality and, consequently, model performance. The following two chapters discuss modern vessel segmentation and stenosis detection networks, and we conclude with the open problems and current limitations of the state of the art.
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision-making and robotic control tasks. However, because they optimize solely for reward, the agent tends to search the same space redundantly, which slows learning and lowers the achieved reward. In this work, we present an off-policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly by adding entropy to the high-level objective. The novelty of this work is the theoretical motivation for adding entropy to the RL objective in the HRL setting. We show empirically that entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablation study to analyze the effects of entropy on the hierarchy, in which adding entropy to the high-level policy emerged as the most desirable configuration. Furthermore, a higher temperature in the low-level policy leads to Q-value overestimation and increases the stochasticity of the environment that the high-level policy operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
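For reference, a minimal statement of the standard maximum-entropy RL objective that such entropy-augmented methods build on (the exact high-/low-level decomposition used by SHIRO may differ) is

$$J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right],$$

where $\alpha$ is the temperature weighting the entropy bonus; in the hierarchical setting, the abstract indicates this bonus is most beneficial when applied to the high-level policy.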
Image segmentation is a fundamental task in computer vision. Data annotation for training supervised methods can be labor-intensive, motivating unsupervised methods. Some existing approaches extract deep features from pre-trained networks and build a graph to apply classical clustering methods (e.g., $k$-means and normalized-cuts) as a post-processing stage. These techniques reduce the high-dimensional information encoded in the features to pair-wise scalar affinities. In this work, we replace classical clustering algorithms with a lightweight Graph Neural Network (GNN) trained to achieve the same clustering objective function. In contrast to existing approaches, however, we feed the GNN not only the pair-wise affinities between local image features but also the raw features themselves. Maintaining this connection between the raw features and the clustering goal makes it possible to perform semantic part segmentation implicitly, without requiring additional post-processing steps. We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training our image segmentation GNN. Additionally, we use the Correlation-Clustering (CC) objective to perform clustering without defining the number of clusters ($k$-less clustering). We apply the proposed method to object localization, segmentation, and semantic part segmentation tasks, surpassing state-of-the-art performance on multiple benchmarks.
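As an illustration of how a classical clustering objective can serve as a self-supervised loss, here is a minimal PyTorch-style sketch of a relaxed normalized-cut loss computed from a GNN's soft cluster assignments; the function name and exact formulation are illustrative assumptions, not the paper's implementation:

```python
import torch

def soft_ncut_loss(S, W, eps=1e-8):
    """Relaxed normalized-cut loss for soft cluster assignments.
    S: (N, K) soft assignments (rows sum to 1); W: (N, N) symmetric affinity matrix."""
    d = W.sum(dim=1)                                  # node degrees
    assoc = torch.einsum('nk,nm,mk->k', S, W, S)      # within-cluster association per cluster
    vol = torch.einsum('nk,n->k', S, d)               # cluster volume per cluster
    return (1.0 - assoc / (vol + eps)).sum()          # equals K - sum_k assoc_k / vol_k
```

Minimizing this loss over the GNN's output assignments plays the role that normalized-cuts post-processing plays in earlier pipelines.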
We introduce the concepts of inverse solvability and security for a generic linear forward model and demonstrate how they can be applied to models used in federated learning. We provide examples of such models which differ in the resulting inverse solvability and security as defined in this paper. We also show how the large number of users participating in a given iteration of federated learning can be leveraged to increase both solvability and security. Finally, we discuss possible extensions of the presented concepts including the nonlinear case.
Choosing the values of hyper-parameters in sparse Bayesian learning (SBL) can significantly impact performance. However, the hyper-parameters are normally tuned manually, which is often a difficult task. Most recently, effective automatic hyper-parameter tuning was achieved by using an empirical auto-tuner. In this work, we address the issue of hyper-parameter auto-tuning using neural network (NN)-based learning. Inspired by the empirical auto-tuner, we design and learn an NN-based auto-tuner, and show that considerable improvements in convergence rate and recovery performance can be achieved.
We revisit the performance of the classic gradual magnitude pruning (GMP) baseline for large language models, focusing on the classic BERT benchmark on various popular tasks. Despite existing evidence in the literature that GMP performs poorly, we show that a simple and general variant, which we call GMP*, can match and sometimes outperform more complex state-of-the-art methods. Our results provide a simple yet strong baseline for future work, highlight the importance of parameter tuning for baselines, and even improve the performance of the state-of-the-art second-order pruning method in this setting.
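For context, gradual magnitude pruning typically raises sparsity along a cubic schedule while repeatedly masking the smallest-magnitude weights; the sketch below shows this generic recipe (the specific schedule and tuning in GMP* may differ):

```python
import numpy as np

def gmp_sparsity(step, start_step, end_step, initial_sparsity=0.0, final_sparsity=0.9):
    # Cubic sparsity schedule commonly used for gradual magnitude pruning (Zhu & Gupta, 2017).
    if step <= start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3

def magnitude_mask(weights, sparsity):
    # Zero out the smallest-magnitude fraction of weights given the target sparsity.
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return np.abs(weights) > threshold
```

During fine-tuning, the mask is recomputed at scheduled steps with the current target sparsity and applied to the layer's weights before the next update.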
Motion planning is challenging for autonomous systems in multi-obstacle environments due to nonconvex collision avoidance constraints. Directly applying numerical solvers to these nonconvex formulations fails to exploit the constraint structure, resulting in excessive computation time. In this paper, we present an accelerated collision-free motion planner, namely the regularized dual alternating direction method of multipliers (RDADMM, or RDA for short), for the model predictive control (MPC) based motion planning problem. The proposed RDA addresses nonconvex motion planning by solving a smooth biconvex reformulation obtained via duality, and allows the collision avoidance constraints to be computed in parallel for each obstacle, which significantly reduces computation time. We validate the performance of the RDA planner through path-tracking experiments with car-like robots in both simulation and real-world settings. Experimental results show that the proposed method generates smooth collision-free trajectories with less computation time than other benchmarks and performs robustly in cluttered environments.
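To illustrate why the per-obstacle structure helps, here is a generic consensus-ADMM skeleton (not the actual RDA/biconvex formulation) in which each obstacle keeps its own local copy and dual variable, so the corresponding updates decouple and can run in parallel; `grad_f_solve` and `project_obstacle` are assumed placeholders supplied by the caller:

```python
import numpy as np

def consensus_admm(x0, grad_f_solve, project_obstacle, obstacles, rho=1.0, iters=50):
    """Generic consensus-ADMM sketch: one local copy z_i and dual u_i per obstacle,
    so each obstacle's projection/update runs independently (parallelizable)."""
    x = x0.copy()
    z = [x0.copy() for _ in obstacles]
    u = [np.zeros_like(x0) for _ in obstacles]
    for _ in range(iters):
        # x-update: couples the planning objective with all local copies
        x = grad_f_solve([zi - ui for zi, ui in zip(z, u)], rho)
        # z- and u-updates: independent per obstacle
        for i, obs in enumerate(obstacles):
            z[i] = project_obstacle(x + u[i], obs)
            u[i] = u[i] + x - z[i]
    return x
```

The key point mirrored from the abstract is that the per-obstacle subproblems do not interact within an iteration, which is what enables the reported parallel speedup.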
The problem of sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications, such as radar, sonar, and ultrasound imaging. To reduce its computational and implementation cost, we propose a compression method that enables blind recovery from far fewer measurements in time. The proposed compression measures the signal through a filter followed by subsampling, significantly reducing implementation cost. We derive theoretical guarantees for the identifiability and recovery of a sparse filter from compressed measurements. Our results allow the design of a wide class of compression filters. We then propose a data-driven unrolled learning framework to learn the compression filter and solve the S-MBD problem. The encoder is a recurrent inference network that maps compressed measurements to an estimate of the sparse filter. We demonstrate that our unrolled learning method is more robust to the choice of source shapes and achieves better recovery performance than optimization-based methods. Finally, in applications with limited data (few-shot settings), we highlight the superior generalization capability of unrolled learning compared with conventional deep learning.
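A minimal sketch of the "filter then subsample" measurement model described above; the filter taps and subsampling factor here are illustrative, and the paper's guarantees concern how such compression filters should be designed:

```python
import numpy as np

def compress_channel(y, h, sub_factor):
    # Compressed measurement: convolve the channel output y with the compression
    # filter h, then keep every sub_factor-th sample.
    filtered = np.convolve(y, h, mode='full')
    return filtered[::sub_factor]
```

Blind recovery then estimates the sparse filter directly from these reduced-rate measurements rather than from the full-rate signal.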
We implement and interpret various supervised learning experiments involving real quadratic fields with class numbers 1, 2, and 3. From a data-science perspective, we quantify the relative difficulty of separating class numbers of matching versus different parity, apply the methodologies of feature analysis and principal component analysis, and use symbolic classification to develop machine-learned formulas for class numbers 1, 2, and 3 that apply to our dataset.
Artificial intelligence (AI) is one of the most promising technologies of the 21st century, with a visible impact on society and the economy. In this work, we give a brief overview of global trends, industrial applications, and selected use cases from our international experience and work in industry and academia. The goal is to present positive global and regional practices and to offer an informed opinion on realistic goals and opportunities for positioning B&H on the global AI scene.